Members of the Page community gathered at the London Stock Exchange Group for a Page Insight Forum on AI, geopolitics and C-Suite challenges. CCOs compared notes and pressure-tested assumptions about what AI adoption really means for their roles, their teams and their organizations. As the conversations progressed, one thing became clear: the discussion about AI has already moved past “what if” and firmly into “how do we.”
The energy from the presentations and working sessions was palpable, and it comes through in the insights below, which we are sharing with the broader community under the Chatham House Rule to show where the profession is headed next.
Inside the room, CCOs were not asking if AI belongs in their organizations. They were debating how fast, how far and under whose control.
One line cut through the noise: “AI will not improve bad communicators; it will only make them faster.”
That insight reframed the day. AI accelerates output, not judgment. It scales what already exists, good or bad. The value of the forum wasn’t in learning new tools, but in the collective realization that the most valuable parts of the CCO role remain stubbornly human.
As AI systems become easier to use and increasingly automated, the ability to craft the perfect prompt will matter less and less. The real differentiator is already emerging: context engineering, the discipline of deliberately shaping content for specific stakeholders by embedding intent, history, power dynamics and risk awareness, so that meaning and trust are preserved even when messages are produced at machine speed.
CCOs create value in this world by supplying what AI cannot: strategic intent, organizational memory, political awareness and cultural nuance. Without that context, even the most fluent output is directionless or risky.
Participants shared examples of stress-testing messages through “hostile journalist” lenses or simulated stakeholder reactions. The lesson was consistent: the quality of the output depended less on the model and more on the framing.
The rule that followed was simple and unanimous: humans must remain the final decision-makers. AI can suggest, summarize and simulate. Humans own judgment, risk tolerance and responsibility.
The New Non-Negotiable Roles
When the group imagined an AI-native communications team built from scratch, a few roles stood out.
The Head of Trust. As AI increases speed and scale, the cost of getting it wrong rises. Trust, authenticity and stakeholder vulnerability can no longer be managed on the margins. They require clear ownership and constant attention.
The Data Analyst. Public LLMs level the playing field. Advantage comes from proprietary insight. Teams need people who can connect internal data, extract meaning and translate signals into strategy.
Then there was the uncomfortable question of junior talent. If AI handles first drafts and basic execution, where do early-career communicators learn judgment? The answer isn’t to preserve outdated tasks. It’s to redesign training around evaluation, verification and decision-making: the skills that matter most when speed is no longer the scarce resource.
AI doesn’t replace the communicator; it removes friction.
By automating routine work, it creates space for higher-level thinking: counsel over content, judgment over output. As execution becomes easier, leadership becomes more important.
The future function favors generalists over narrow specialists and demands deeper embedding of CCOs within business leadership teams. Context, trust and judgment will be the first tools in the kit.
Participants left the sessions with clear next steps. Move beyond experimentation. Design AI-native workflows with intention. Put trust and human judgment at the center. In a world where content is cheap and quick, those are the assets that compound.
Join the conversation on social media.